
    Sparse Regression Codes for Multi-terminal Source and Channel Coding

    We study a new class of codes for Gaussian multi-terminal source and channel coding. These codes are designed using the statistical framework of high-dimensional linear regression and are called Sparse Superposition or Sparse Regression codes. Codewords are linear combinations of subsets of columns of a design matrix. These codes were recently introduced by Barron and Joseph and shown to achieve the channel capacity of AWGN channels with computationally feasible decoding. They have also recently been shown to achieve the optimal rate-distortion function for Gaussian sources. In this paper, we demonstrate how to implement random binning and superposition coding using sparse regression codes. In particular, with minimum-distance encoding/decoding it is shown that sparse regression codes attain the optimal information-theoretic limits for a variety of multi-terminal source and channel coding problems.
    Comment: 9 pages, appeared in the Proceedings of the 50th Annual Allerton Conference on Communication, Control, and Computing, 2012
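    The codebook construction described above is concrete enough to sketch in a few lines. The following minimal NumPy sketch shows how a sparse regression codeword is formed, assuming the standard section-based construction with one selected column per section; the dimensions and the unit coefficients are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

# Minimal sketch of a sparse regression (SPARC) codeword. The design matrix
# A has i.i.d. Gaussian entries; the message vector beta has L sections of
# M entries each, with exactly one nonzero per section. All dimensions are
# illustrative choices.
rng = np.random.default_rng(0)

n, L, M = 64, 8, 16                                      # block length, sections, section size
A = rng.normal(0.0, 1.0 / np.sqrt(n), size=(n, L * M))   # design matrix

# A message selects one column index per section; the codeword is the sum
# of the chosen columns. The rate of such a code is (L * log2(M)) / n bits
# per sample.
message = rng.integers(0, M, size=L)      # one symbol per section
beta = np.zeros(L * M)
beta[np.arange(L) * M + message] = 1.0    # one nonzero entry per section
codeword = A @ beta                       # length-n codeword
```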

    An Achievable Rate Region for the Broadcast Channel with Feedback

    A single-letter achievable rate region is proposed for the two-receiver discrete memoryless broadcast channel with generalized feedback. The coding strategy involves block-Markov superposition coding, using Marton's coding scheme for the broadcast channel without feedback as the starting point. If the message rates in the Marton scheme are too high to be decoded at the end of a block, each receiver is left with a list of messages compatible with its output. Resolution information is sent in the following block to enable each receiver to resolve its list. The key observation is that the resolution information of the first receiver is correlated with that of the second. This correlated information is efficiently transmitted via joint source-channel coding, using ideas similar to the Han-Costa coding scheme. Using this result, we obtain an achievable rate region for the stochastically degraded AWGN broadcast channel with noisy feedback from only one receiver. It is shown that this region is strictly larger than the no-feedback capacity region.
    Comment: To appear in IEEE Transactions on Information Theory. Contains an example of the AWGN broadcast channel with noisy feedback

    Lossy Compression via Sparse Linear Regression: Performance under Minimum-distance Encoding

    We study a new class of codes for lossy compression with the squared-error distortion criterion, designed using the statistical framework of high-dimensional linear regression. Codewords are linear combinations of subsets of columns of a design matrix. Called a Sparse Superposition or Sparse Regression codebook, this structure is motivated by an analogous construction proposed recently by Barron and Joseph for communication over an AWGN channel. For i.i.d. Gaussian sources and minimum-distance encoding, we show that such a code can attain the Shannon rate-distortion function with the optimal error exponent, for all distortions below a specified value. It is also shown that sparse regression codes are robust in the following sense: a codebook designed to compress an i.i.d. Gaussian source of variance $\sigma^2$ with (squared-error) distortion $D$ can compress any ergodic source of variance less than $\sigma^2$ to within distortion $D$. Thus the sparse regression ensemble retains many of the good covering properties of the i.i.d. random Gaussian ensemble, while having a compact representation in terms of a matrix whose size is a low-order polynomial in the block length.
    Comment: This version corrects a typo in the statement of Theorem 2 of the published paper
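    To make the minimum-distance encoding rule concrete, here is a brute-force sketch over a tiny sparse regression codebook. The exhaustive search below is exponential in the number of sections and is purely illustrative; the parameters are assumptions, not values from the paper.

```python
import itertools

import numpy as np

# Brute-force minimum-distance encoding over a tiny SPARC codebook:
# enumerate every message, form its codeword, and keep the one closest to
# the source block in squared error. Feasible only for toy parameters.
rng = np.random.default_rng(1)

n, L, M = 16, 3, 4
A = rng.normal(0.0, 1.0 / np.sqrt(n), size=(n, L * M))
source = rng.normal(size=n)               # i.i.d. Gaussian source block

best_msg, best_dist = None, np.inf
for msg in itertools.product(range(M), repeat=L):     # all M**L messages
    codeword = sum(A[:, l * M + m] for l, m in enumerate(msg))
    dist = np.mean((source - codeword) ** 2)          # squared-error distortion
    if dist < best_dist:
        best_msg, best_dist = msg, dist

print(best_msg, best_dist)
```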

    Lossy Compression via Sparse Linear Regression: Computationally Efficient Encoding and Decoding

    We propose computationally efficient encoders and decoders for lossy compression using a Sparse Regression Code. The codebook is defined by a design matrix, and codewords are structured linear combinations of columns of this matrix. The proposed encoding algorithm sequentially chooses columns of the design matrix to successively approximate the source sequence. It is shown to achieve the optimal distortion-rate function for i.i.d. Gaussian sources under the squared-error distortion criterion. For a given rate, the parameters of the design matrix can be varied to trade off distortion performance with encoding complexity. An example of such a trade-off as a function of the block length $n$ is the following: with computational resource (space or time) per source sample of $O((n/\log n)^2)$, for a fixed distortion level above the Gaussian distortion-rate function, the probability of excess distortion decays exponentially in $n$. The Sparse Regression Code is robust in the following sense: for any ergodic source, the proposed encoder achieves the optimal distortion-rate function of an i.i.d. Gaussian source with the same variance. Simulations show that the encoder has good empirical performance, especially at low and moderate rates.
    Comment: 14 pages, to appear in IEEE Transactions on Information Theory
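    The sequential encoder lends itself to a short sketch. The greedy pass below walks through the sections in order, picks the column most correlated with the current residual in each section, and subtracts a scaled copy before moving on. The fixed per-section coefficient is a simplifying assumption made here for illustration; the paper derives the coefficients from the rate and the target distortion.

```python
import numpy as np

# Greedy successive-approximation encoding for a SPARC codebook: one pass
# over the L sections, choosing in each the column with the largest inner
# product with the current residual.
rng = np.random.default_rng(2)

n, L, M = 64, 8, 16
A = rng.normal(0.0, 1.0 / np.sqrt(n), size=(n, L * M))
source = rng.normal(size=n)

coeff = np.linalg.norm(source) / np.sqrt(L)   # assumed equal-energy split across sections
residual = source.copy()
chosen = np.zeros(L, dtype=int)
for l in range(L):
    section = A[:, l * M:(l + 1) * M]
    chosen[l] = int(np.argmax(section.T @ residual))   # best-matching column
    residual -= coeff * section[:, chosen[l]]

reconstruction = source - residual            # the selected codeword
distortion = np.mean(residual ** 2)           # per-sample squared error
print(chosen, distortion)
```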

    PCA Initialization for Approximate Message Passing in Rotationally Invariant Models

    We study the problem of estimating a rank-one signal in the presence of rotationally invariant noise, a class of perturbations more general than Gaussian noise. Principal Component Analysis (PCA) provides a natural estimator, and sharp results on its performance have been obtained in the high-dimensional regime. Recently, an Approximate Message Passing (AMP) algorithm has been proposed as an alternative estimator with the potential to improve the accuracy of PCA. However, the existing analysis of AMP requires an initialization that is both correlated with the signal and independent of the noise, which is often unrealistic in practice. In this work, we combine the two methods and propose to initialize AMP with PCA. Our main result is a rigorous asymptotic characterization of the performance of this estimator. Both the AMP algorithm and its analysis differ from those previously derived in the Gaussian setting: at every iteration, our AMP algorithm requires a specific term to account for the PCA initialization, while in the Gaussian case, PCA initialization affects only the first iteration of AMP. The proof is based on a two-phase artificial AMP that first approximates the PCA estimator and then mimics the true AMP. Our numerical simulations show excellent agreement between AMP results and theoretical predictions, and suggest an interesting open direction on achieving Bayes-optimal performance.
    Comment: 72 pages, 2 figures, appeared in Neural Information Processing Systems (NeurIPS), 2021
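    A toy version of PCA-initialized AMP can be sketched under strong simplifying assumptions: Gaussian (Wigner-like) noise rather than general rotationally invariant noise, a tanh denoiser, and only the standard Onsager correction. As the abstract notes, the paper's algorithm includes an additional term at every iteration to account for the PCA initialization; the sketch below omits it and is not the paper's method.

```python
import numpy as np

# Toy PCA-initialized AMP for a rank-one spiked matrix Y = (alpha/n) x x^T + W.
# Gaussian noise, the tanh denoiser, and the single Onsager term are all
# simplifying assumptions for illustration.
rng = np.random.default_rng(3)

n, alpha, iters = 2000, 2.0, 10
x = rng.choice([-1.0, 1.0], size=n)         # signal with +/-1 entries
W = rng.normal(size=(n, n)) / np.sqrt(n)
W = (W + W.T) / np.sqrt(2.0)                # symmetric (Wigner-like) noise
Y = (alpha / n) * np.outer(x, x) + W

# PCA initialization: leading eigenvector of Y, rescaled to norm sqrt(n).
_, eigvecs = np.linalg.eigh(Y)
f = eigvecs[:, -1] * np.sqrt(n)

s_prev = np.zeros(n)
for _ in range(iters):
    s = np.tanh(f)                          # illustrative denoiser
    b = np.mean(1.0 - s ** 2)               # Onsager coefficient, ~ E[g'(f)]
    f = Y @ s - b * s_prev                  # AMP update with memory term
    s_prev = s

overlap = abs(np.dot(np.tanh(f), x)) / n    # normalized correlation with x
print(f"overlap with signal: {overlap:.3f}")
```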